
    L1-optimal linear programming estimator for periodic frontier functions with Hölder continuous derivative

    We propose a new estimator, based on a linear programming method, for smooth frontiers of sample points. The derivative of the frontier function is assumed to be Hölder continuous. The estimator is defined as a linear combination of sufficiently regular kernel functions that covers all the sample points and whose associated support has the smallest surface. The coefficients of the linear combination are computed by solving a linear programming problem. The L1-error between the estimated and the true frontier functions is shown to converge to zero almost surely, and the rate of convergence is proved to be optimal.

    A Pickands type estimator of the extreme value index

    One of the main goals of extreme value analysis is to estimate the probability of rare events given a sample from an unknown distribution. The upper tail behavior of this distribution is described by the extreme value index. We present a new estimator of the extreme value index adapted to any domain of attraction. Its construction is similar to that of Pickands' estimator. Its weak consistency and asymptotic distribution are established, and a bias reduction method is proposed. Our estimator is compared with classical extreme value index estimators through a simulation study.
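
As an illustration of the Pickands-type construction, the classical Pickands estimator combines three upper order statistics. The sketch below applies it to a Pareto sample whose true extreme value index is 1; this is a minimal sketch of the classical estimator, not the adapted estimator proposed above, and the sample size and threshold k are arbitrary illustrative choices.

```python
import math
import random

def pickands(sample, k):
    """Classical Pickands estimator of the extreme value index,
    based on the order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1)."""
    x = sorted(sample)            # ascending order statistics
    n = len(x)
    a = x[n - k]                  # X_(n-k+1): the k-th largest value
    b = x[n - 2 * k]              # X_(n-2k+1)
    c = x[n - 4 * k]              # X_(n-4k+1)
    return math.log((a - b) / (b - c)) / math.log(2.0)

random.seed(0)
# Pareto(1) via inverse transform: X = 1/(1-U) has extreme value index 1
sample = [1.0 / (1.0 - random.random()) for _ in range(10000)]
xi_hat = pickands(sample, k=200)
```

With this sample size the estimate typically lands near the true value 1, although the Pickands estimator is known to have a large asymptotic variance, which is what motivates refined variants such as the one above.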

    Bivariate copulas defined from matrices

    We propose a semiparametric family of copulas based on a set of orthonormal functions and a matrix. This new copula makes it possible to attain values of Spearman's rho arbitrarily close to one without introducing a singular component. Moreover, it encompasses several extensions of FGM copulas as well as copulas based on partitions of unity, such as Bernstein or checkerboard copulas. Finally, it is also shown that projections of arbitrary copula densities onto tensor product bases fit within our framework.

    Sub-Weibull distributions: generalizing sub-Gaussian and sub-Exponential properties to heavier-tailed distributions

    We propose the notion of sub-Weibull distributions, which are characterised by tails lighter than (or equally light as) the right tail of a Weibull distribution. This novel class generalises the sub-Gaussian and sub-Exponential families to potentially heavier-tailed distributions. Sub-Weibull distributions are parameterized by a positive tail index θ and reduce to sub-Gaussian distributions for θ = 1/2 and to sub-Exponential distributions for θ = 1. A characterisation of the sub-Weibull property based on moments and on the moment generating function is provided, and properties of the class are studied. An estimation procedure for the tail parameter is proposed and applied to an example stemming from Bayesian deep learning.
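
The moment characterisation mentioned above implies that a sub-Weibull variable with tail index θ satisfies ||X||_p = O(p^θ) as p grows. A minimal sketch of this growth rate, using the exact absolute moments of a standard Gaussian (for which θ = 1/2): regressing log ||Z||_p on log p should give a slope close to 1/2. The range of p used is an arbitrary choice.

```python
import math

def gaussian_lp_norm(p):
    """Exact L^p norm of a standard Gaussian Z:
    E|Z|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    log_moment = (p / 2) * math.log(2) + math.lgamma((p + 1) / 2) - 0.5 * math.log(math.pi)
    return math.exp(log_moment / p)

ps = [2 ** k for k in range(1, 10)]           # p = 2, 4, ..., 512
xs = [math.log(p) for p in ps]
ys = [math.log(gaussian_lp_norm(p)) for p in ps]

# least-squares slope of log ||Z||_p against log p; should approach theta = 1/2
n = len(ps)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

The same log-moment regression idea is one natural route to estimating the tail parameter from data, as in the estimation procedure mentioned above.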

    Collaborative Sliced Inverse Regression

    Sliced Inverse Regression (SIR) is an effective method for dimensionality reduction in high-dimensional regression problems. However, the method places requirements on the distribution of the predictors that are hard to check, since they depend on unobserved variables. It has been shown that these requirements are satisfied when the distribution of the predictors is elliptical. In the case of mixture models, ellipticity is violated and, in addition, there is no guarantee of a single underlying regression model across the different components. Our approach clusters the predictor space to force the condition to hold on each cluster, and includes a merging technique to look for different underlying models in the data. A study on simulated data as well as two real applications are provided. It appears that SIR, unsurprisingly, is not capable of dealing with a mixture of Gaussians involving different underlying models, whereas our approach correctly identifies the mixture.

    Algebraic properties of copulas defined from matrices

    We propose a new family of copulas, defined by S_φ(u,v) = φ(u)ᵀ A φ(v), (u,v) ∈ [0,1]², where φ is a function from [0,1] to ℝ^p and A is a p × p matrix. Let us remark that if p = 2 and A is a diagonal matrix, then S_φ reduces to the family proposed in \cite{Amblard05}. As a consequence, S_φ can be seen as an extension of this former family to arbitrary matrices. First, we shall give sufficient conditions on A and φ to obtain copulas. Then, we shall establish the dependence and symmetry properties of this family of copulas. Finally, we shall study the stability properties of S_φ with respect to the operator * (presented for instance in \cite{Nelsen99}, p. 194) as well as other algebraic properties.
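
The bilinear-form construction can be sketched numerically. One standard example: with p = 2, φ(u) = (u, u(1-u)) and a diagonal matrix A = diag(1, θ), the form S_φ(u,v) = φ(u)ᵀ A φ(v) yields the FGM copula C(u,v) = uv + θ u(1-u) v(1-v). This is a minimal sketch of the diagonal special case only; the particular φ and θ below are illustrative choices.

```python
def s_phi(u, v, A, phi):
    """Bilinear-form copula candidate S_phi(u, v) = phi(u)^T A phi(v)."""
    pu, pv = phi(u), phi(v)
    return sum(A[i][j] * pu[i] * pv[j]
               for i in range(len(pu)) for j in range(len(pv)))

theta = 0.5                                  # FGM dependence parameter, |theta| <= 1
phi = lambda t: (t, t * (1.0 - t))           # first coordinate carries the margins
A = [[1.0, 0.0], [0.0, theta]]               # diagonal A recovers the FGM copula

# uniform margins: S_phi(u, 1) = u and S_phi(1, v) = v, since phi(1) = (1, 0)
c_uv = s_phi(0.3, 0.7, A, phi)               # = 0.3*0.7 + 0.5*0.3*0.7*0.7*0.3
```

For general A and φ, the boundary and 2-increasing conditions are exactly what the sufficient conditions mentioned in the abstract are designed to guarantee.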

    Comparison of Weibull tail-coefficient estimators

    We address the problem of estimating the Weibull tail-coefficient, which is the regular variation exponent of the inverse failure rate function. We propose a family of estimators of this coefficient together with an associated extreme quantile estimator. Their asymptotic normality is established and their asymptotic mean-square errors are compared. The results are illustrated in some finite-sample situations.

    Parsimonious Gaussian Process Models for the Classification of Multivariate Remote Sensing Images

    A family of parsimonious Gaussian process models is presented. These models allow the construction of a Gaussian mixture model in a kernel feature space by assuming that the data of each class live in a specific subspace. The proposed models are used to build a kernel Markov random field (pGPMRF), which is applied to classify the pixels of a real multivariate remotely sensed image. In terms of classification accuracy, some of the proposed models perform as well as an SVM, and better than another kernel Gaussian mixture model previously defined in the literature. The pGPMRF provides the best classification accuracy thanks to its spatial regularization.

    Weighted least-squares inference for multivariate copulas based on dependence coefficients

    (The author Gildas Mazo is now at INRA, Centre de Jouy-en-Josas, MaIAGE unit.) In this paper, we address the issue of estimating the parameters of general multivariate copulas, that is, copulas whose partial derivatives may not exist. To this end, we consider a weighted least-squares estimator based on dependence coefficients and establish its consistency and asymptotic normality. The estimator's performance on finite samples is illustrated through simulations and a real dataset.
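
The principle of matching model-based and empirical dependence coefficients can be sketched on a toy one-parameter case: for the FGM copula, Spearman's rho equals θ/3, so a least-squares fit on that single coefficient reduces to θ̂ = 3·ρ̂. This is only an illustration of the idea, not the multivariate weighted estimator studied in the paper; the copula, sample size, and seed are arbitrary choices.

```python
import math
import random

def fgm_sample(n, theta, rng):
    """Sample from the FGM copula C(u,v) = uv + theta*u(1-u)*v(1-v)
    by conditional inversion of C(v|u)."""
    out = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        a = theta * (1.0 - 2.0 * u)
        if abs(a) < 1e-12:
            v = w
        else:
            # solve a*v^2 - (1+a)*v + w = 0, taking the root in [0, 1]
            v = ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * a)
        out.append((u, v))
    return out

def spearman_rho(pairs):
    """Empirical Spearman rank correlation (no ties: margins are continuous)."""
    n = len(pairs)
    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    ru = ranks([p[0] for p in pairs])
    rv = ranks([p[1] for p in pairs])
    d2 = sum((a - b) ** 2 for a, b in zip(ru, rv))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

rng = random.Random(42)
sample = fgm_sample(5000, 0.5, rng)
theta_hat = 3.0 * spearman_rho(sample)   # least-squares fit of rho(theta) = theta/3
```

Because the fit uses only rank-based coefficients, it never touches partial derivatives of the copula, which is exactly what makes this style of estimator attractive for the general copulas considered above.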

    Regularization methods for Sliced Inverse Regression

    Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires inverting the covariance matrix of the predictors. In the case of collinearity among the predictors, or of sample sizes that are small compared to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook, in which it is shown that the SIR axes can be interpreted as solutions of an inverse regression problem. We propose to introduce a Gaussian prior distribution on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations fit within our framework, which permits a global understanding of these methods. Three new priors are proposed, leading to new regularizations of the SIR method. A comparison on simulated data as well as an application to the estimation of Mars surface physical properties from hyperspectral images are provided.
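
A minimal two-dimensional sketch of ridge-type regularized SIR, in the spirit described above: slice the sample by the response, form the between-slice covariance of the slice means, and replace the covariance inverse by (Σ̂ + λI)⁻¹ before extracting the leading direction by power iteration. The link function, λ, and slice count are arbitrary illustrative choices, not the Gaussian priors proposed in the paper.

```python
import math
import random

def matvec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(1)
n, H, lam = 2000, 10, 0.1
beta = (0.6, 0.8)                       # true projection direction (unit norm)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
y = [(beta[0] * x1 + beta[1] * x2) ** 3 + 0.1 * random.gauss(0, 1) for x1, x2 in X]

# center the predictors and form the sample covariance
mu = (sum(x[0] for x in X) / n, sum(x[1] for x in X) / n)
Xc = [(x[0] - mu[0], x[1] - mu[1]) for x in X]
S = [[sum(a[i] * a[j] for a in Xc) / n for j in range(2)] for i in range(2)]

# slice by the response; between-slice covariance of the slice means
order = sorted(range(n), key=lambda i: y[i])
M = [[0.0, 0.0], [0.0, 0.0]]
for h in range(H):
    idx = order[h * n // H:(h + 1) * n // H]
    mh = (sum(Xc[i][0] for i in idx) / len(idx),
          sum(Xc[i][1] for i in idx) / len(idx))
    for i in range(2):
        for j in range(2):
            M[i][j] += (len(idx) / n) * mh[i] * mh[j]

# ridge-regularized inverse (S + lam*I)^(-1), closed form for 2x2
R = [[S[0][0] + lam, S[0][1]], [S[1][0], S[1][1] + lam]]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[R[1][1] / det, -R[0][1] / det], [-R[1][0] / det, R[0][0] / det]]

# leading direction of (S + lam*I)^(-1) M by power iteration
G = matmul(Rinv, M)
v = (1.0, 0.0)
for _ in range(100):
    w = matvec(G, v)
    norm = math.hypot(w[0], w[1])
    v = (w[0] / norm, w[1] / norm)
cos = abs(v[0] * beta[0] + v[1] * beta[1])   # alignment with the true direction
```

Shrinking toward λI here plays the role that a Gaussian prior plays in the inverse-regression formulation: it keeps the estimate well defined even when Σ̂ is ill-conditioned.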